28 research outputs found

    PEGA: Personality-Guided Preference Aggregator for Ephemeral Group Recommendation

    Recently, making recommendations for ephemeral groups, which contain dynamic users and few historical interactions, has received increasing attention. The main challenge for an ephemeral group recommender is how to aggregate individual preferences to represent the group's overall preference. Score aggregation and preference aggregation are two commonly used approaches, which adopt hand-crafted predefined strategies and data-driven strategies, respectively. However, they neglect the importance of individual inherent factors, such as personality, within the group, and they perform poorly given the small number of interaction records. To address these issues, we propose a Personality-Guided Preference Aggregator (PEGA) for ephemeral group recommendation. Concretely, we first adopt a hyper-rectangle to define the concept of Group Personality. We then use a personality attention mechanism to aggregate group preferences. The role of personality in our approach is twofold: (1) to estimate individual users' importance in a group and provide explainability; (2) to alleviate the data sparsity issue that occurs in ephemeral groups. The experimental results demonstrate that our model significantly outperforms state-of-the-art methods in terms of both Recall and NDCG on the Amazon and Yelp datasets.
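
    The following is a minimal sketch of attention-based group preference aggregation guided by an auxiliary (here: personality) embedding, with the group personality taken as the centre of a bounding hyper-rectangle over member personalities. All names, shapes, and the scoring function are illustrative assumptions, not the PEGA implementation.

    # Sketch: personality-guided attention over member preferences (assumed design).
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def aggregate_group_preference(member_prefs, member_personality, group_personality):
        """member_prefs: (n_members, d) preference embeddings
        member_personality: (n_members, p) personality embeddings
        group_personality: (p,) e.g. centre of a hyper-rectangle over member personalities
        Returns a single (d,) group preference vector."""
        # score each member by how close their personality is to the group's
        scores = member_personality @ group_personality   # (n_members,)
        weights = softmax(scores)                          # attention weights
        return weights @ member_prefs                      # weighted sum -> (d,)

    # toy usage with random embeddings
    rng = np.random.default_rng(0)
    prefs = rng.normal(size=(4, 8))                  # 4 members, 8-dim preferences
    pers = rng.normal(size=(4, 5))                   # 5-dim personality embeddings
    group_pers = (pers.min(0) + pers.max(0)) / 2     # centre of the bounding hyper-rectangle
    print(aggregate_group_preference(prefs, pers, group_pers).shape)  # (8,)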

    EMID: An Emotional Aligned Dataset in Audio-Visual Modality

    In this paper, we propose the Emotionally paired Music and Image Dataset (EMID), a novel dataset designed for the emotional matching of music and images, to facilitate auditory-visual cross-modal tasks such as generation and retrieval. Unlike existing approaches that primarily focus on semantic correlations or coarsely divided emotional relations, EMID emphasizes the emotional consistency between music and images using an advanced 13-dimensional emotional model. By incorporating emotional alignment into the dataset, it aims to establish pairs that closely align with human perceptual understanding, thereby improving the performance of auditory-visual cross-modal tasks. We also design a supplemental module named EMI-Adapter to optimize existing cross-modal alignment methods. To validate the effectiveness of EMID, we conduct a psychological experiment, which demonstrates that considering the emotional relationship between the two modalities effectively improves matching accuracy from an abstract perspective. This research lays the foundation for future cross-modal research in domains such as psychotherapy and contributes to advancing the understanding and use of emotions in cross-modal alignment. The EMID dataset is available at https://github.com/ecnu-aigc/EMID.
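
    As a rough illustration of emotion-based cross-modal retrieval over such a dataset, the sketch below ranks music clips by cosine similarity between 13-dimensional emotion vectors, assuming both modalities carry such annotations. This is a generic baseline for illustration only, not the EMI-Adapter method.

    # Sketch: retrieve the music clip whose emotion profile best matches an image.
    import numpy as np

    def cosine_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def retrieve_music(image_emotion, music_emotions):
        """image_emotion: (13,) emotion vector of the query image
        music_emotions: (n_clips, 13) emotion vectors of candidate music clips
        Returns the index of the best-matching clip."""
        sims = np.array([cosine_sim(image_emotion, m) for m in music_emotions])
        return int(np.argmax(sims))

    # toy usage with random emotion vectors
    rng = np.random.default_rng(1)
    query = rng.random(13)
    catalogue = rng.random((100, 13))
    print(retrieve_music(query, catalogue))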

    Siamese Object Tracking for Unmanned Aerial Vehicle: A Review and Comprehensive Analysis

    Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide range of applications and attracted increasing attention in the field of intelligent transportation systems because of its versatility and effectiveness. As an emerging force in the revolutionary trend of deep learning, Siamese networks shine in UAV-based object tracking with their promising balance of accuracy, robustness, and speed. Thanks to the development of embedded processors and the gradual optimization of deep neural networks, Siamese trackers have received extensive research attention and have been preliminarily combined with UAVs. However, due to a UAV's limited onboard computational resources and complex real-world conditions, aerial tracking with Siamese networks still faces severe obstacles in many respects. To further explore the deployment of Siamese networks in UAV-based tracking, this work presents a comprehensive review of leading-edge Siamese trackers, along with an exhaustive UAV-specific analysis based on evaluation with a typical UAV onboard processor. Onboard tests are then conducted to validate the feasibility and efficacy of representative Siamese trackers in real-world UAV deployment. Furthermore, to better promote the development of the tracking community, this work analyzes the limitations of existing Siamese trackers and conducts additional experiments, represented by low-illumination evaluations. Finally, prospects for the development of Siamese tracking for UAV-based intelligent transportation systems are discussed in depth. The unified framework of leading-edge Siamese trackers, i.e., the code library, and the results of their experimental evaluations are available at https://github.com/vision4robotics/SiameseTracking4UAV.
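
    For readers unfamiliar with the core operation shared by Siamese trackers, the sketch below embeds a template and a search region with a shared backbone and cross-correlates the two feature maps to produce a response map. The tiny backbone and the input sizes are illustrative assumptions, not any specific tracker covered by the review.

    # Sketch: the template/search cross-correlation at the heart of Siamese tracking.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySiamese(nn.Module):
        def __init__(self):
            super().__init__()
            # shared backbone applied to both the template and the search region
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            )

        def forward(self, template, search):
            z = self.backbone(template)   # (1, C, hz, wz) template features
            x = self.backbone(search)     # (1, C, hx, wx) search-region features
            # cross-correlation: slide the template features over the search features
            return F.conv2d(x, z)         # (1, 1, hx-hz+1, wx-wz+1) response map

    tracker = TinySiamese()
    resp = tracker(torch.rand(1, 3, 63, 63), torch.rand(1, 3, 127, 127))
    print(resp.shape)  # the response-map peak indicates the target position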

    Direct and indirect effects of climate on richness drive the latitudinal diversity gradient in forest trees

    Data accessibility statement: full census data are available upon reasonable request from the ForestGEO data portal, http://ctfs.si.edu/datarequest/. We thank Margie Mayfield, three anonymous reviewers and Jacob Weiner for constructive comments on the manuscript. This study was financially supported by the National Key R&D Program of China (2017YFC0506100), the National Natural Science Foundation of China (31622014 and 31570426), and the Fundamental Research Funds for the Central Universities (17lgzd24) to CC. XW was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB3103). DS was supported by the Czech Science Foundation (grant no. 16-26369S). Yves Rosseel provided valuable suggestions on using the lavaan package to conduct the SEM analyses. Funding and citation information for each forest plot is available in Supplementary Information Text 1. Peer reviewed. Postprint.

    Sentiment Analysis of Chinese Product Reviews Based on Fusion of DUAL-Channel BiLSTM and Self-Attention

    Product reviews provide crucial information for both consumers and businesses, offering insights needed before purchasing a product or service. However, existing sentiment analysis methods, especially for the Chinese language, struggle to effectively capture contextual information due to complex semantics, multiple sentiment polarities, and long-term dependencies between words. In this paper, we propose a sentiment classification method based on the BiLSTM algorithm to address these challenges in natural language processing. Self-Attention-CNN BiLSTM (SAC-BiLSTM) leverages dual channels to extract features from both character-level and word-level embeddings. It combines BiLSTM and self-attention mechanisms for feature extraction and weight allocation, aiming to overcome the limitations in mining contextual information. Experiments were conducted on the onlineshopping10cats dataset, a standard corpus of e-commerce shopping reviews available in the ChineseNlpCorpus 2018. The experimental results demonstrate the effectiveness of our proposed algorithm, with Recall, Precision, and F1 scores reaching 0.9409, 0.9369, and 0.9404, respectively.
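
    The sketch below shows a dual-channel BiLSTM classifier with self-attention pooling in the spirit of the description above: one channel over character embeddings and one over word embeddings, fused by concatenation before classification. The vocabulary sizes, dimensions, fusion strategy, and the omission of the CNN component are my assumptions for illustration, not the authors' exact configuration.

    # Sketch: dual-channel (character + word) BiLSTM with self-attention pooling.
    import torch
    import torch.nn as nn

    class AttnPool(nn.Module):
        """Self-attention pooling: score each time step, softmax, weighted sum."""
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, h):                   # h: (B, T, dim)
            w = torch.softmax(self.score(h), dim=1)
            return (w * h).sum(dim=1)           # (B, dim)

    class DualChannelSentiment(nn.Module):
        def __init__(self, char_vocab=5000, word_vocab=50000, emb=128, hidden=128, classes=2):
            super().__init__()
            self.char_emb = nn.Embedding(char_vocab, emb)
            self.word_emb = nn.Embedding(word_vocab, emb)
            self.char_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.word_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.char_pool = AttnPool(2 * hidden)
            self.word_pool = AttnPool(2 * hidden)
            self.out = nn.Linear(4 * hidden, classes)

        def forward(self, char_ids, word_ids):  # (B, Tc), (B, Tw)
            c, _ = self.char_lstm(self.char_emb(char_ids))
            w, _ = self.word_lstm(self.word_emb(word_ids))
            fused = torch.cat([self.char_pool(c), self.word_pool(w)], dim=-1)
            return self.out(fused)              # sentiment logits

    model = DualChannelSentiment()
    logits = model(torch.randint(0, 5000, (2, 40)), torch.randint(0, 50000, (2, 20)))
    print(logits.shape)  # (2, 2)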

    NFAT5 is cleaved by viral proteases 2A and 3C.

    HeLa cells were treated with 10 μM MG132 (A) or 25 μM z-VAD-fmk (B) and then infected with CVB3 at an MOI of 10. At the indicated time points pi, the cellular proteins were subjected to Western blot analysis of NFAT5 and other proteins using the indicated antibodies. β-actin was used as a loading control. (C) HeLa cells were infected with CVB3 at an MOI of 10 for 4 and 6 h and then subjected to Western blot analysis using an antibody against the N-terminal epitope of NFAT5. (D) HeLa cells transfected with a plasmid expressing the 6*myc-NFAT5 fusion protein (upper panel) were infected with CVB3 or sham-infected as described above and subjected to Western blot analysis using an antibody against the myc tag (lower panel). (E) HeLa cells expressing myc-NFAT5 were transfected with pIRES-2A (2A), pIRES-3C (3C) or vector only (V). At 36 h pt, the cells were subjected to Western blot analysis using an antibody against the myc tag. The cells infected with CVB3 or sham-infected with PBS were used as controls. Arrows indicate the 3C cleavage bands. (F) Tissue homogenate from mouse heart was incubated with recombinant 2A or 3C for 8 h. The mouse NFAT5 (mNFAT5) N-terminal epitope was then detected by Western blot using a specific antibody.

    Hypertonic mannitol solutions inhibit CVB3 replication and promote cell survival during CVB3 infection.

    HeLa cells were treated with 100 mM or 200 mM mannitol solution or PBS and then subjected to CVB3 infection or sham-infection. At 4 and 6 h pi, cells were subjected to Western blot analysis of VP1 (A), phase-contrast microscopy imaging (B) and MTS cell viability assay (C). Cell viability was expressed as the percentage of cell survival relative to the sham-infection control, which was set as 100%. Three biological replicates were performed for each assay and the results were subjected to statistical analysis.

    CVB3 infection reduces NFAT5 protein but not mRNA.

    SV40 human cardiomyocytes and HeLa cells were infected with CVB3 at an MOI of 40 or 10, respectively, or sham-infected with PBS and harvested at the indicated time points pi. Cellular proteins and RNAs were extracted for Western blot analysis of NFAT5 protein (A, C) and qPCR measurement of NFAT5 mRNA (B, D), respectively. In the qPCR assay, the result is shown as the relative level of each mRNA normalized to the level of GAPDH mRNA in the same sample. Three biological replicates were performed for each assay. (E) 4-week-old A/J mice were infected with CVB3 at 10^5 pfu (plaque-forming units) or sham-infected with PBS. At 6 days pi, the mice were sacrificed and the heart tissue was homogenized for Western blot analysis of NFAT5 protein. β-actin was used as a loading control. Quantitation of NFAT5 protein was conducted by densitometry using the NIH ImageJ program (right panel). Three biological replicates were performed and the results were subjected to statistical analysis.
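
    For context on the GAPDH normalization mentioned above, the sketch below computes relative expression with the common 2^-ΔΔCt approach. The legend does not state the exact formula used, so this method and the Ct values are assumptions for illustration only.

    # Sketch: relative qPCR quantification normalized to GAPDH (assumed 2^-ddCt).
    def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
        """Return target mRNA level relative to the sham-infected control."""
        d_ct_sample = ct_target - ct_gapdh              # normalize to GAPDH (sample)
        d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize to GAPDH (control)
        return 2 ** -(d_ct_sample - d_ct_control)

    # hypothetical Ct values: NFAT5 and GAPDH in infected vs. sham-infected cells
    print(relative_expression(24.8, 18.1, 24.5, 18.0))  # a value near 1.0 means no mRNA change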